feat: add modular resource fetcher adapters for Expo and bare React Native #759

rizalibnu wants to merge 38 commits into software-mansion:main
Conversation
Add modular resource fetcher adapters to support both Expo and bare React Native environments.

## New Packages

### @rn-executorch/expo-adapter

- Expo-based resource fetcher using expo-file-system
- Supports asset bundles, local files, and remote downloads
- Download management with pause/resume/cancel capabilities

### @rn-executorch/bare-adapter

- Bare React Native resource fetcher using RNFS and background downloader
- Supports all platform-specific file operations
- Background download support with proper lifecycle management

## Core Changes

- Refactor ResourceFetcher to use adapter pattern
- Add initExecutorch() and cleanupExecutorch() for adapter management
- Export adapter interfaces and utilities
- Update LLM controller to support new resource fetching

## App Updates

- Update computer-vision, llm, speech-to-text, text-embeddings apps
- Add adapter initialization to each app
- Update dependencies to use workspace packages
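The commit message above doesn't spell out the adapter contract itself. As orientation, here is a minimal sketch of what a `ResourceFetcherAdapter` covering the pause/resume/cancel capabilities listed above could look like; the method names and signatures are assumptions, not the package's actual API:

```typescript
// Hypothetical sketch only; the real interface ships with the core package.
export interface ResourceFetcherAdapter {
  // Resolve an asset bundle entry, local file, or remote URL to a local path.
  fetch(source: string, onProgress?: (progress: number) => void): Promise<string>;
  // Download management, matching the pause/resume/cancel capabilities above.
  pauseFetching(source: string): Promise<void>;
  resumeFetching(source: string): Promise<void>;
  cancelFetching(source: string): Promise<void>;
}
```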
Add a complete bare React Native example app demonstrating LLM integration with react-native-executorch.

## App: llm_bare

### Features

- Simple chat UI for LLM interactions
- Model loading with progress indicator
- Real-time streaming responses
- Send/stop generation controls
- Auto-scrolling message history

### Stack

- **Framework**: React Native 0.81.5 (bare/CLI)
- **LLM**: Uses LLAMA3_2_1B_SPINQUANT model
- **Adapter**: @rn-executorch/bare-adapter
- **Dependencies**: Minimal deps, only essential packages

### Platform Configuration

#### iOS

- Bridging header for RNBackgroundDownloader
- Background URL session handling in AppDelegate
- Background modes (fetch, processing)
- Xcode project configuration

#### Android

- Required permissions for background downloads
- Foreground service configuration
- Network state access
- Proper manifest configuration

### Infrastructure

- Babel configuration for export namespace transform

This serves as a reference implementation for using react-native-executorch in bare React Native environments (non-Expo).
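The example app's core flow reduces to the `LLMModule` lifecycle that appears in the code sample later in this thread. A condensed, hedged sketch (import path and exact signatures assumed from that sample):

```typescript
import { LLMModule, LLAMA3_2_1B_SPINQUANT } from 'react-native-executorch';

// Sketch of the example app's core flow: load with progress, stream a
// reply via tokenCallback, then free the model.
async function runChatOnce(prompt: string): Promise<void> {
  const llm = new LLMModule({
    tokenCallback: (token: string) => console.log('token:', token),
  });
  await llm.load(LLAMA3_2_1B_SPINQUANT, (progress: number) => {
    console.log(`Loading: ${(progress * 100).toFixed(0)}%`);
  });
  await llm.sendMessage(prompt);
  llm.delete();
}
```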
Lint CI fails (you don't need to worry about the second failing CI ;)). Please could you fix the errors from this CI?
@msluszniak I’m not able to reproduce the lint CI failure locally — everything passes on my side. I’ll take a closer look and investigate further to see what might be causing the discrepancy (environment, cache, or config differences). I’ll follow up with a fix or more details as soon as I find the root cause.
@rizalibnu Sure thing, maybe the configuration of the CI itself does not work as it should. We'll also look at this, don't worry :)
@msluszniak Found the issue 👍 Fixed by adding a build step before the adapter type checks and bumping Node to 22 in
…package.json for bare and expo adapters
…r for better error handling
rizalibnu force-pushed from 72914c0 to 359427b
Add explicit resetAdapter() method to ResourceFetcher class for cleaner API.

- Add resetAdapter() static method that sets adapter to null
- Update cleanupExecutorch() to use resetAdapter() instead of type assertion hack
- Update error message to reference new package names (@react-native-executorch/*)

This provides a cleaner, type-safe way to reset the adapter without requiring "null as unknown as ResourceFetcherAdapter" type assertion.
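A minimal sketch of the shape this commit describes; the class internals are assumed, and only the names `ResourceFetcher`, `resetAdapter()`, and `cleanupExecutorch()` come from the commit message:

```typescript
// Reuses the hypothetical ResourceFetcherAdapter shape sketched earlier.
type ResourceFetcherAdapter = { fetch(source: string): Promise<string> };

class ResourceFetcher {
  private static adapter: ResourceFetcherAdapter | null = null;

  static setAdapter(adapter: ResourceFetcherAdapter): void {
    ResourceFetcher.adapter = adapter;
  }

  // Replaces the previous `null as unknown as ResourceFetcherAdapter` hack.
  static resetAdapter(): void {
    ResourceFetcher.adapter = null;
  }
}

export function cleanupExecutorch(): void {
  ResourceFetcher.resetAdapter();
}
```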
…ecking and cleaning
Also I think that adding these changes will make section
Hi, I worked a bit with this PR by building a bare React Native app. From what I've tested, I didn't get a single issue with the bare resource fetcher, and the integration was also very smooth. To sum up, my experience with it was good and I don't have any issues. Thank you @rizalibnu for this amazing piece of code :D
…el (software-mansion#734)

## Description

Currently, there is no way to set configuration in `LLMModule` other than to load the model first and then call the `configure` method. This PR makes it possible to configure parameters before loading the actual model.

### Introduces a breaking change?

- [ ] Yes
- [X] No

### Type of change

- [ ] Bug fix (change which fixes an issue)
- [ ] New feature (change which adds functionality)
- [ ] Documentation update (improves or adds clarity to existing documentation)
- [X] Other (chores, tests, code style improvements etc.)

### Tested on

- [x] iOS
- [ ] Android

### Testing instructions

Try to run configure on the hook returned by `useLLM` and check that everything works. For simplicity, here is an example way to test it inside our library:

* Create the following file in `apps/llm/app/my_test/index.tsx`:

```typescript
import { useIsFocused } from '@react-navigation/native';
import React, { useEffect, useState, useRef } from 'react';
import {
  View,
  Text,
  TextInput,
  TouchableOpacity,
  FlatList,
  StyleSheet,
  ActivityIndicator,
  KeyboardAvoidingView,
  Platform,
  SafeAreaView,
} from 'react-native';
import { LLMModule, LLAMA3_2_1B_QLORA } from 'react-native-executorch';

// Define message type for UI
type Message = {
  role: 'user' | 'assistant' | 'system';
  content: string;
};

export default function VoiceChatScreenWrapper() {
  const isFocused = useIsFocused();
  return isFocused ? <LlamaChat /> : null;
}

const LlamaChat = () => {
  const [messages, setMessages] = useState<Message[]>([]);
  const [input, setInput] = useState('');
  const [isModelReady, setIsModelReady] = useState(false);
  const [loadingProgress, setLoadingProgress] = useState(0);
  const [isGenerating, setIsGenerating] = useState(false);

  // Use a ref to keep the LLM instance stable across renders
  const llmRef = useRef<LLMModule | null>(null);

  useEffect(() => {
    // 1. Initialize the LLM Module
    llmRef.current = new LLMModule({
      // Update state whenever history changes (covers both user and bot messages)
      messageHistoryCallback: (updatedMessages) => {
        // We cast this to our Message type (assuming the library returns compatible format)
        setMessages(updatedMessages as Message[]);
      },
      // Optional: Use tokenCallback if you want to trigger haptics or very fine-grained updates
      tokenCallback: (token) => {
        // console.log('New token:', token);
      },
    });

    // 2. Load the model
    const loadModel = async () => {
      try {
        await llmRef.current?.load(LLAMA3_2_1B_QLORA, (progress) => {
          setLoadingProgress(progress);
        });
        setIsModelReady(true);
      } catch (error) {
        console.error('Failed to load model:', error);
      }
    };
    loadModel();

    llmRef.current?.configure({
      chatConfig: {
        systemPrompt:
          "You are extremely enthusiastic chat assistant that is ecstatic about chatting with me.",
      },
    });

    // 3. Cleanup: Delete model from memory when component unmounts
    return () => {
      console.log('Cleaning up LLM...');
      llmRef.current?.delete();
    };
  }, []);

  const handleSend = async () => {
    if (!input.trim() || !isModelReady || isGenerating) return;
    const userText = input;
    setInput(''); // Clear input immediately
    setIsGenerating(true);
    try {
      // sendMessage automatically updates the history via the callback defined in useEffect
      await llmRef.current?.sendMessage(userText);
    } catch (error) {
      console.error('Error generating response:', error);
    } finally {
      setIsGenerating(false);
    }
  };

  const handleStop = () => {
    llmRef.current?.interrupt();
    setIsGenerating(false);
  };

  // --- Render Helpers ---
  if (!isModelReady) {
    return (
      <View style={styles.centerContainer}>
        <ActivityIndicator size="large" color="#007AFF" />
        <Text style={styles.loadingText}>
          Loading Model... {(loadingProgress * 100).toFixed(0)}%
        </Text>
      </View>
    );
  }

  return (
    <SafeAreaView style={styles.container}>
      <KeyboardAvoidingView
        behavior={Platform.OS === 'ios' ? 'padding' : undefined}
        style={styles.keyboardContainer}
      >
        <FlatList
          data={messages}
          keyExtractor={(_, index) => index.toString()}
          contentContainerStyle={styles.listContent}
          renderItem={({ item }) => (
            <View
              style={[
                styles.bubble,
                item.role === 'user' ? styles.userBubble : styles.botBubble,
              ]}
            >
              <Text style={item.role === 'user' ? styles.userText : styles.botText}>
                {item.content}
              </Text>
            </View>
          )}
        />
        <View style={styles.inputContainer}>
          <TextInput
            style={styles.input}
            placeholder="Ask Llama..."
            value={input}
            onChangeText={setInput}
            editable={!isGenerating}
          />
          {isGenerating ? (
            <TouchableOpacity onPress={handleStop} style={styles.stopButton}>
              <Text style={styles.buttonText}>Stop</Text>
            </TouchableOpacity>
          ) : (
            <TouchableOpacity onPress={handleSend} style={styles.sendButton}>
              <Text style={styles.buttonText}>Send</Text>
            </TouchableOpacity>
          )}
        </View>
      </KeyboardAvoidingView>
    </SafeAreaView>
  );
};

const styles = StyleSheet.create({
  container: { flex: 1, backgroundColor: '#F5F5F5' },
  centerContainer: { flex: 1, justifyContent: 'center', alignItems: 'center' },
  loadingText: { marginTop: 10, fontSize: 16, color: '#333' },
  keyboardContainer: { flex: 1 },
  listContent: { padding: 16 },
  bubble: {
    maxWidth: '80%',
    padding: 12,
    borderRadius: 16,
    marginBottom: 10,
  },
  userBubble: {
    alignSelf: 'flex-end',
    backgroundColor: '#007AFF',
    borderBottomRightRadius: 2,
  },
  botBubble: {
    alignSelf: 'flex-start',
    backgroundColor: '#E5E5EA',
    borderBottomLeftRadius: 2,
  },
  userText: { color: '#FFF', fontSize: 16 },
  botText: { color: '#000', fontSize: 16 },
  inputContainer: {
    flexDirection: 'row',
    padding: 10,
    borderTopWidth: 1,
    borderColor: '#DDD',
    backgroundColor: '#FFF',
  },
  input: {
    flex: 1,
    backgroundColor: '#F0F0F0',
    borderRadius: 20,
    paddingHorizontal: 16,
    paddingVertical: 10,
    fontSize: 16,
    marginRight: 10,
  },
  sendButton: {
    backgroundColor: '#007AFF',
    justifyContent: 'center',
    alignItems: 'center',
    paddingHorizontal: 20,
    borderRadius: 20,
  },
  stopButton: {
    backgroundColor: '#FF3B30',
    justifyContent: 'center',
    alignItems: 'center',
    paddingHorizontal: 20,
    borderRadius: 20,
  },
  buttonText: { color: '#FFF', fontWeight: '600' },
});
```

* Add the following in `apps/llm/app/_layout.tsx`:

```
+ <Drawer.Screen
+   name="my_test/index"
+   options={{
+     drawerLabel: 'Llama Chat',
+     title: 'Llama Chat',
+     headerTitleStyle: { color: ColorPalette.primary },
+   }}
+ />
```

* Add the following in `apps/llm/app/index.tsx`:

```
+ <TouchableOpacity
+   style={styles.button}
+   onPress={() => router.navigate('my_test/')}
+ >
+   <Text style={styles.buttonText}>LLama chat</Text>
+ </TouchableOpacity>
```

Run the llm app and ask about anything. The generation config should work correctly, and the LLM's responses should now be super ecstatic. Now, move this part:

```typescript
llmRef.current?.configure({
  chatConfig: {
    systemPrompt:
      "You are extremely enthusiastic chat assistant that is ecstatic about chatting with me.",
  },
});
```

before loading the model and check that everything still works correctly.

### Screenshots

<!-- Add screenshots here, if applicable -->

### Related issues

<!-- Link related issues here using #issue-number -->

### Checklist

- [x] I have performed a self-review of my code
- [x] I have commented my code, particularly in hard-to-understand areas
- [ ] I have updated the documentation accordingly
- [x] My changes generate no new warnings

### Additional notes

<!-- Include any additional information, assumptions, or context that reviewers might need to understand this PR. -->
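Condensed, the capability software-mansion#734 adds is just the call ordering below (a sketch reusing names from the test file above, with constructor options trimmed):

```typescript
import { LLMModule, LLAMA3_2_1B_QLORA } from 'react-native-executorch';

async function setupLlm(): Promise<LLMModule> {
  const llm = new LLMModule({
    tokenCallback: (token: string) => console.log(token),
  });
  // New in software-mansion#734: configure() may be called *before* load();
  // previously the model had to be loaded first.
  llm.configure({
    chatConfig: { systemPrompt: 'You are an extremely enthusiastic chat assistant.' },
  });
  await llm.load(LLAMA3_2_1B_QLORA, (progress: number) =>
    console.log(`Loading: ${(progress * 100).toFixed(0)}%`)
  );
  return llm;
}
```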
… param name (software-mansion#801)

## Description

This PR changes the param name from `resize` to `resizeToInput` in the image segmentation APIs. It also defaults to true now, as the performance impact is acceptable.

### Introduces a breaking change?

- [x] Yes
- [ ] No

### Type of change

- [ ] Bug fix (change which fixes an issue)
- [ ] New feature (change which adds functionality)
- [ ] Documentation update (improves or adds clarity to existing documentation)
- [x] Other (chores, tests, code style improvements etc.)

### Tested on

- [ ] iOS
- [ ] Android

### Testing instructions

<!-- Provide step-by-step instructions on how to test your changes. Include setup details if necessary. -->

### Screenshots

<!-- Add screenshots here, if applicable -->

### Related issues

<!-- Link related issues here using #issue-number -->

### Checklist

- [ ] I have performed a self-review of my code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have updated the documentation accordingly
- [ ] My changes generate no new warnings

### Additional notes

<!-- Include any additional information, assumptions, or context that reviewers might need to understand this PR. -->

---------

Co-authored-by: Mateusz Sluszniak <56299341+msluszniak@users.noreply.github.com>
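At call sites the rename looks roughly like this; a hypothetical sketch, since the PR text doesn't show the exact segmentation API signature and only the `resize` to `resizeToInput` rename and the new default come from the description:

```typescript
// Hypothetical segmentation model shape, for illustration only.
type SegmentationModel = {
  forward: (uri: string, opts?: { resizeToInput?: boolean }) => Promise<unknown>;
};

export async function segment(model: SegmentationModel, imageUri: string) {
  // Before software-mansion#801: { resize: true } had to be passed explicitly.
  // After: `resizeToInput` defaults to true, so opting out is the explicit case.
  return model.forward(imageUri, { resizeToInput: false });
}
```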
## Description

This PR changes the binaries to include new tokenizer functionality. Added:

- Wordpiece model and decoder
- BERT and RoBERTa tokenization is supported
- Padding and truncation from tokenizer.json is now respected

### Introduces a breaking change?

- [ ] Yes
- [x] No

### Type of change

- [x] Bug fix (change which fixes an issue)
- [ ] New feature (change which adds functionality)
- [ ] Documentation update (improves or adds clarity to existing documentation)
- [ ] Other (chores, tests, code style improvements etc.)

### Tested on

- [x] iOS
- [x] Android

### Testing instructions

Run the test suites. Run all apps that use tokenizers and verify they load and produce proper output (LLM, S2T, T2I, Embeddings etc.).

### Checklist

- [x] I have performed a self-review of my code

### Additional notes

Running the tests can yield some issues; we couldn't get to why they happen. Calling the failing functions in the example apps yields proper results, so it is probably an issue with the test environment. We decided not to hold this PR due to the failing TCs and to investigate them later on.
## Description

<!-- Provide a concise and descriptive summary of the changes implemented in this PR. -->

### Introduces a breaking change?

- [x] Yes
- [ ] No

This PR introduces a breaking change, as the return type of the `transcribe` and `stream` methods is now based on the `TranscriptionResult` type. There are also no longer commited/nonCommited properties on the hook, and `stream` is now an async generator.

### Type of change

- [ ] Bug fix (change which fixes an issue)
- [x] New feature (change which adds functionality)
- [ ] Documentation update (improves or adds clarity to existing documentation)
- [ ] Other (chores, tests, code style improvements etc.)

### Tested on

- [x] iOS
- [x] Android

### Testing instructions

* Run the demo app in `apps/speech` and run transcription in both time-stamping and regular mode (both from a URL and from real-time audio, to test both the `transcribe` and `stream` methods).
* Run the voice chat in `apps/llm` to check that transcription appears. *NOTE*: This example seems to be a bit buggy.
* You need to run this on an **Android device**, since this PR also fixes the `Speech to Text` demo app when using a physical Android device. Earlier, the required microphone permissions weren't granted and the example effectively didn't work.
* Check that the documentation for the modified sections is updated and that the API reference is correct as well.
* Run the tests and check that they compile and work as previously.

### Screenshots

<!-- Add screenshots here, if applicable -->

### Related issues

<!-- Link related issues here using #issue-number -->

### Checklist

- [x] I have performed a self-review of my code
- [x] I have commented my code, particularly in hard-to-understand areas
- [x] I have updated the documentation accordingly
- [x] My changes generate no new warnings

### Additional notes

<!-- Include any additional information, assumptions, or context that reviewers might need to understand this PR. -->
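Because `stream` is now an async generator yielding `TranscriptionResult` values, consumers switch to a `for await` loop. A hedged sketch, since the exact module API isn't shown here:

```typescript
// Placeholder for the library's real exported type.
type TranscriptionResult = unknown;

// `stream` is now an async generator, so consumption becomes a
// `for await` loop over TranscriptionResult values.
export async function logTranscription(
  stream: AsyncGenerator<TranscriptionResult>
): Promise<void> {
  for await (const result of stream) {
    console.log(result);
  }
}
```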
The last thing from my side is to add

Description
This PR introduces modular resource fetcher adapters to support both Expo and bare React Native environments, replacing the previous monolithic approach with a flexible, platform-specific architecture.
Key Changes
New Adapter Packages:
Initialization Changes:
Documentation Updates:
Introduces a breaking change?
Migration Required:
Users must now explicitly initialize the library with a resource fetcher adapter:
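For example (a minimal sketch; the import paths are assumed, while the adapter names come from the testing instructions below):

```typescript
import { initExecutorch } from 'react-native-executorch';
import { ExpoResourceFetcher } from '@rn-executorch/expo-adapter';

// Must run before any react-native-executorch hook is used.
initExecutorch({ resourceFetcher: ExpoResourceFetcher });
```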
Type of change
Tested on
Testing instructions
For Expo projects:
- `yarn add @rn-executorch/expo-adapter expo-file-system expo-asset`
- `initExecutorch({ resourceFetcher: ExpoResourceFetcher })`

For bare React Native projects:
- `yarn add @rn-executorch/bare-adapter @dr.pogodin/react-native-fs @kesha-antonov/react-native-background-downloader`
- `initExecutorch({ resourceFetcher: BareResourceFetcher })`

Screenshots
Related issues
Closes #549
Checklist
Additional notes
Why This Change:
Split Into Multiple PRs:
To make review easier, this work has been split:
BREAKING CHANGE:
`initExecutorch()` with explicit adapter selection is now required before using any react-native-executorch hooks. Users must install and configure either `@rn-executorch/expo-adapter` or `@rn-executorch/bare-adapter`, depending on their project type.
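For bare apps, a natural place for this call is the entry file, before the root component registers; a sketch under the same import-path assumptions:

```typescript
// index.js of a bare app — hypothetical wiring; import paths assumed.
import { AppRegistry } from 'react-native';
import { initExecutorch } from 'react-native-executorch';
import { BareResourceFetcher } from '@rn-executorch/bare-adapter';
import App from './App';
import { name as appName } from './app.json';

// Register the bare resource fetcher before any hook can run.
initExecutorch({ resourceFetcher: BareResourceFetcher });

AppRegistry.registerComponent(appName, () => App);
```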